Design/Process Learning from Production Test

Authors

  • David Abercrombie
  • Bernd Koenemann
Abstract

Modern Design-For-Test (DFT) practices not only simplify test generation but also make it much easier to diagnose problems uncovered in production test. In fact, many diagnostic steps can be automated enough to enable batch processing of the large quantities of fail data captured during product ramp and volume production. Hidden in these fail data is very valuable information about the product design, the manufacturing process, and the interactions between the two. This embedded tutorial provides an overview of some of the analysis methods that are being used and/or prototyped in the industry, as well as the underlying data sharing between the design and manufacturing areas that is required for, and enabled by, the analyses.

Introduction

Design and manufacturing used to exist in two more or less separate worlds. The "fastest path from RTL to GDSII" mantra of the Electronic Design Automation (EDA) industry neglects that silicon is not printed from GDSII and that production test generally stands between the design and shipping working products. With modern sub-wavelength lithography, the path from GDSII to silicon includes increasingly complex and oftentimes finicky shape-enhancement algorithms for mask preparation. Even with all this effort, the final shapes ending up on the silicon are no longer entirely faithful reproductions of the GDSII. And not only lateral shapes are subject to distortions: copper interconnect structures, for example, experience thickness variations caused by density-dependent effects in Chemical Mechanical Polishing (CMP). These and other emerging processing idiosyncrasies lead to sometimes undesirable parametric consequences in electrical behavior. How significant the parametric aberrations are depends to a certain degree on how well the design structures anticipate the modifications and variations in manufacturing, and how well the extraction/verification tools predict them.
In other words, modern design tools increasingly have to be familiar with and understand the manufacturing processes and how they interact with the design intent. Manufacturing, similarly, has to live with the fact that yield is no longer dominated primarily by catastrophic defects due to process issues like dirt or impurities, but also by parametric defects arising from design-process interactions. Yield ramp and yield management, consequently, can no longer exist exclusively in the process domain, but must equally focus on design-related issues. Electrical test at wafer or final sort is the place where the rubber of design-specific parametric behavior first meets the road of silicon reality. It is production test where the statistical impact of certain design weaknesses first has an opportunity to manifest itself. Characterization test may be more thorough but does not involve enough samples for meaningful statistical analysis. Analyzing large quantities of production test fails, hence, is the first chance to understand and subsequently alleviate the root causes of the fails. With modern diagnostic analysis techniques, manufacturing can be transformed from a "simple" pass/fail screening operation into an increasingly valuable data acquisition and design/process learning facility.

About Defects

Defects come in many forms and have many different root causes. Historically, manufacturing yield loss was largely dominated by random particle contamination (60% of the loss in mature 350nm processes, according to [1]). The particles interfere with the proper formation of silicon structures. Figure 1 shows an example of how a particle on a bare wafer leads to malformed interconnects after CMP.

Figure 1: Particle-induced defect [2]

This type of defect can be characterized as random because particles settle on the silicon surface randomly.
If a particle happens to hit an area with no circuit structures in close enough proximity, then the particle has no effect on the functionality of the final chip and may be considered "harmless". On the other hand, if a particle of a certain size hits a so-called "critical area" such that it overlaps with or sufficiently encroaches on nearby circuit structures, then the particle can cause malformations of the circuit elements. The yield impact of particles depends on the defect density (number of particles per unit area), their size distribution, and the critical area on the chip. Figure 2 illustrates the critical-area concept for particles that can lead to additional material.

Figure 2: Critical area for additional material

Yield loss due to such randomly distributed particles or impurities is sometimes also referred to as defect-limited loss or area-based yield loss [1]. Another important characteristic of the particle defect shown in Figure 1 is that it is a so-called "visual defect" that can be seen by in-line wafer inspection equipment. Such equipment performs surface scans of selected wafers at selected processing steps in the fabricator. The surface-scan images are optionally stored for future reference, and they are post-processed by image analysis software to identify defect locations and defect sizes. The locations (x, y, layer, etc.), sizes, and possibly other classifications are stored in a so-called defect map. In-line inspection and defect maps are one vehicle for continuously monitoring defect densities and defect distributions in a wafer processing facility. Not all defects, however, are random in nature and affect all wafer areas indiscriminately; some are more systematic in nature and may affect only certain design features or feature combinations. It is predicted that feature-based yield loss will become increasingly more significant than area-based yield loss for sub-wavelength technologies (see Figure 3).
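The area-based yield loss described above is commonly quantified with a Poisson-style yield model over defect density and critical area. The Python sketch below illustrates the idea; the function names and the discrete size-distribution representation are illustrative assumptions of this sketch, not taken from the tutorial.

```python
import math

def critical_area_yield(defect_density_per_cm2, critical_area_cm2):
    """Poisson yield model: Y = exp(-D * A_crit)."""
    return math.exp(-defect_density_per_cm2 * critical_area_cm2)

def weighted_critical_area(size_distribution, critical_area_of_size):
    """Average the critical area over a discrete particle-size distribution.

    size_distribution: list of (size_um, probability) pairs summing to 1.
    critical_area_of_size: maps a particle size to the chip's critical
    area (cm^2) for that size (larger particles threaten more layout).
    """
    return sum(p * critical_area_of_size(s) for s, p in size_distribution)

# Example: 0.5 defects/cm^2 acting on 0.4 cm^2 of critical area
# gives Y = exp(-0.2), roughly 82% defect-limited yield.
y = critical_area_yield(0.5, 0.4)
```

In practice the critical area itself is computed per layer and per defect size from the layout geometry; the size-weighted average above stands in for that integration.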
Figure 3: Predicted evolution of area-based versus feature-based yields [1]

The extreme depth-versus-width aspect ratio of via structures in modern technologies makes it difficult to reliably establish contact between the interconnect layers, and resistive opens in via structures have become a notorious yield issue. Sub-wavelength lithography effects, even with Resolution Enhancement Techniques (RET) like OPC, are another example of a potential systematic defect source. Sub-wavelength lithography effects impair the fidelity with which the intended layout features expressed in the GDSII can be printed onto the silicon. Figure 4 shows an overlay of some intended layout features (solid shapes) and the predicted silicon image (white outlines) printed from an RET-enhanced mask.

Figure 4: Comparison of intended layout features and predicted silicon image (white outlines) [3]

Despite the considerable effort expended in RET to preserve the features as faithfully as possible, the predicted actual silicon shapes are not an exact match. The formation of side-lobes in the actual silicon changes the parasitic coupling capacitances and increases the likelihood of shorts between some adjacent features. Hence, both the parametric electrical behavior and the defect sensitivity of the circuit are impacted in a systematic fashion (i.e., only for certain combinations of layout features). In addition, gate density and new fabrication techniques like dual-damascene copper interconnect produce a variety of more or less subtle defect types, such as antenna effects, metal erosion, stress voids, resistive vias, and micro-bridges. A couple of examples are depicted in Figure 4a.

Figure 4a: Nanometer test challenges: small geometries and new materials create new defect types, such as resistive vias (left) and opens (right).

In addition to visible physical defect manifestations, other circuit failures can arise from electrical interactions and effects.
Cross-talk between neighboring interconnect lines or noise coupling between circuits are examples of such defects. Accurately predicting circuit behavior during design is getting increasingly complex and difficult. The mismatch between silicon and layout shapes, as illustrated in Figure 4, is only one of many effects that can render parasitics extracted from the design data inaccurate and lead to unpredictable marginalities. Although systematic in nature, such effects can conspire with random inter- and intra-die process variations and signal integrity issues to create seemingly random parametric problems. To make matters worse, the parametric defects and some non-parametric physical processing problems are entirely invisible to the in-line inspection tools traditionally used for defect monitoring. The International Technology Roadmap for Semiconductors (ITRS 2003) predicts that parametric and non-visual random defects will become increasingly significant and cumbersome for advanced process nodes [4].

Traditional Defect Learning: Defect Maps and Test Structures

The already mentioned surface-scan equipment is one method to more or less directly find out about certain defects in the process. However, the surface scan alone cannot predict well enough which of the observed defects will actually alter the electrical behavior in a way that leads to circuit malfunctions. Specially designed, easy-to-test-and-diagnose test structures are one method by which semiconductor manufacturers learn which kinds of defects, and how many, actually affect the electrical properties of manufactured circuits. Such test structures are routinely manufactured and analyzed during all stages of process development and volume production to quantitatively monitor electrically relevant defect distributions. Test structures often are designed for the detection and analysis of specific defect types in specific process modules.
Figure 5, for example, illustrates a single-layer serpentine structure useful for inter-line bridging defect analysis.

Figure 5: Single-layer serpentine test structure for bridging defects [5]

Regular and simple test structures like the one shown in Figure 5 greatly simplify the characterization of specific defect distributions, but lack the comprehensiveness of the wide range of circuit structures and layout features used in actual product designs. Hence, more sophisticated test chips combine a wide variety of test structures and may even be tailored to more closely represent the circuit and layout features of particular product designs. The Characterization Vehicles described in [1] are examples of highly sophisticated defect monitors.

Defect Learning: Product Designs

Product design substructures and complete product designs can also be useful for defect learning, particularly if the production test methodology is appropriately enhanced for that purpose. To that effect, the production tests must be able to detect the presence of relevant defect populations and produce sufficient diagnostic data about the fail mechanisms. The diagnostic data must be logged and subsequently analyzed to determine their root causes. This analysis entails characterizing and localizing the likely root-cause area (that is, finding the signals in the electrical circuit schematic that are most likely associated with the root cause), and then physically finding the defect (that is, if necessary, de-processing the indicated area to visually find the defect). Detailed Electrical and Physical Failure Analysis (EFA/PFA) tends to involve expensive lab equipment and manual effort by highly skilled Failure Analysis (FA) or silicon debug specialists.

Memory Diagnostics and Bitmapping

Their dense yet regular structure, which is very sensitive to defects and at the same time simplifies diagnostics, has made memories an industry favorite for defect monitoring and learning.
Stand-alone DRAM products were for a long time the production vehicles of choice for yield ramp and process monitoring. With logic products (e.g., micro-processors) nowadays often leading the introduction of new technologies, the emphasis has shifted to embedded memories. In direct-access test methods, the Automatic Test Equipment (ATE) can directly access the boundary of the embedded memory macros such that the memory test resources of the ATE can be utilized. These ATE resources may include sophisticated programmable pattern-generation hardware as well as real-time fail data logging and diagnostic features. In many practical applications, memory Built-In Self-Test (BIST) is replacing direct-access memory testing. Memory diagnosis involves exciting the defects such that they cause the test to fail, and collecting detailed fail data during test in a process called bit-mapping. If BIST is used, the BIST methodology has to provide enough flexibility and sophistication to match the defect-detection capability of ATE. As new process nodes can create new and sometimes unpredictable failure modes, the current trend in advanced BIST development is to improve the timing capabilities (to detect subtle timing fails), the richness of test algorithms (to detect known subtle fail modes), and the programmability (to allow for adjusting the test to emerging fail scenarios) of the on-chip BIST engines. In addition, more emphasis is given to data-logging features for extracting detailed bit-level fail data from the chip. Where applicable, some fail data processing has to be moved entirely on-chip. For example, for test-cost reasons, repairable memories with redundant spare rows/columns tend to require real-time fail data analysis during production test to determine how to program the spare address re-mapping data (e.g., the fuse block). Typical logic ATE neither has the real-time redundancy analysis features, nor is the data logging from BIST fast enough for real-time processing.
Hence, redundancy allocation is moved on-chip. Data logging for failure analysis, by contrast, does not have to be entirely real-time, and some practical trade-offs between data-logging bandwidth and on-chip hardware complexity/overhead can be made. Figure 5a shows some hardware components that make up a modern micro-coded memory BIST engine with on-chip 2-dimensional redundancy allocation (both spare rows and columns) for embedded DRAMs.

Figure 5a: High-level diagram of a BIST engine for embedded DRAMs with 2-dimensional redundancy [11]

Regardless of whether external test equipment with direct memory access or BIST is used, the test algorithms address the memory in terms of the logic address space of the memory. The initial fail data, hence, consist of logic word/bit errors. To be more useful for defect learning, the logic word/bit data have to be translated into physical row/column data. This involves understanding the physical memory architecture and any address/bit scrambling that is implemented. The bit-map results can be displayed in logical and/or physical array form for each failing chip or for a full wafer. The pattern of failing bits in the memory array can be an indicator of the type of defect to look for. A defect in a single cell will in general create a different fail pattern than, say, a bit/word-line or address decoder problem. Figure 5b shows some physical fail bit-map patterns related to different fail modes:

  • Vertical pair: bit-line contact
  • Partial column: resistive bit-line short
  • Multi-row: address decoder
  • Swatch: CMP scratch
  • Entire bit: sense amplifier, I/O
  • Catastrophic: timing circuit

Figure 5b: Physical memory fail patterns and related fail modes [12]

If full physical information including the wafer map is available, then the fail bit maps can be translated from chip-level coordinates to wafer-level coordinates for comparison with defect maps captured by in-line inspection equipment. An example is shown in Figure 6.
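The logical-to-physical translation and coarse fail-pattern classification described above can be sketched in a few lines of Python. This is a minimal illustration under assumed data structures: real designs encode the row/column scrambling in design-specific tables, and production classifiers distinguish many more pattern classes than the handful shown here.

```python
def logical_to_physical(word_addr, bit_pos, row_map, col_map):
    """Translate a failing (logical word address, bit position) into a
    physical (row, column). row_map/col_map are lookup tables standing in
    for the design-specific address/bit scrambling."""
    return (row_map[word_addr], col_map[bit_pos])

def classify_fail_pattern(physical_fails):
    """Coarsely classify a list of physical (row, col) fails, in the spirit
    of the pattern-to-fail-mode mapping of Figure 5b (illustrative only)."""
    rows = {r for r, _ in physical_fails}
    cols = {c for _, c in physical_fails}
    if len(physical_fails) == 1:
        return "single cell"
    if len(cols) == 1:
        return "column-oriented (e.g., bit-line or sense-amp issue)"
    if len(rows) == 1:
        return "row-oriented (e.g., word-line or decoder issue)"
    return "distributed (e.g., random particles or gross fail)"
```

With the chip's placement on the wafer map, the physical (row, column) coordinates can then be further translated into wafer coordinates for overlay with in-line defect maps.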
Figure 6: Correlating defect maps (left) and memory fail bit maps (right) at wafer (top) and intra-chip (bottom) levels [6]

This example shows that many different pieces of information contribute to efficient defect learning. For the example at hand, fail bit-map data logs from the test equipment are needed together with complete logical/physical information (chip-level and wafer layout), plus defect maps from wafer processing. Besides having access to the different data types, some logistical effort is involved to make sure that inspection and bitmapping can be performed on the same wafers. The example reveals a matching pattern in the defect map and the physically organized bit map, which likely indicates that individual bit cells are defective. In addition to visualizing the bit and defect maps, modern bitmapping tools can accelerate subsequent detailed failure analysis by helping to quickly and automatically navigate FA and/or debug analysis equipment like microscopes, probe stations, or FIB machines to the areas of interest.

Logic Diagnostics and Bitmapping

As with memories, defect analysis from logic production test fails is only possible if the relevant defects are actually detected by the production test patterns. And, similarly, advanced logic test methodology developments try to improve timing accuracy (to detect subtle timing fails), provide enhanced and more flexible fault models (to force the Automatic Test Pattern Generation, ATPG, tools to generate more stringent tests for relevant logic failure modes), and enable better fail data logging (to allow large-scale fail data collection during production test). One good example of the trend to enhance the tests for relevant fail mechanisms is to derive fault models from the physical design. Figure 6a shows a representative tool flow for extracting realistic bridging faults from the physical design (SPICE netlist, LVS, HDL netlist).
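The core of a layout-derived bridging-fault extraction of the kind Figure 6a alludes to is enumerating net pairs whose shapes run close together. The sketch below uses a crude rectangle-gap proxy as a simplifying assumption; production flows instead rank candidate pairs by proper critical-area computations across the full layout database.

```python
from itertools import combinations

def rect_gap(a, b):
    """Gap between two axis-aligned rectangles (x0, y0, x1, y1);
    0 if they touch or overlap."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    dx = max(bx0 - ax1, ax0 - bx1, 0)
    dy = max(by0 - ay1, ay0 - by1, 0)
    return max(dx, dy)

def extract_bridge_candidates(net_shapes, max_spacing):
    """List candidate bridging-fault net pairs, sorted by minimum spacing.

    net_shapes: {net_name: [rectangles on one routing layer]}. Net pairs
    that come within max_spacing anywhere are candidates for a two-line
    bridge; ATPG can then target each pair with an explicit bridge test.
    """
    candidates = []
    for (name_a, rects_a), (name_b, rects_b) in combinations(net_shapes.items(), 2):
        gap = min(rect_gap(ra, rb) for ra in rects_a for rb in rects_b)
        if gap <= max_spacing:
            candidates.append((name_a, name_b, gap))
    return sorted(candidates, key=lambda c: c[2])
```

Ranking by spacing (or, better, by critical area) lets the test flow concentrate pattern-generation effort on the bridges most likely to occur in silicon.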





Publication date: 2006